Two-step estimation in linear regressions with adaptive learning
Authors
Abstract
Weak consistency and asymptotic normality of the ordinary least squares (OLS) estimator in a linear regression with adaptive learning are derived when the crucial, so-called 'gain' parameter is estimated in a first step by nonlinear least squares from an auxiliary model.
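To make the two-step idea concrete, the sketch below works through a stylized decreasing-gain learning regression. Everything here is an illustrative assumption rather than the paper's auxiliary-model setup: the recursion y_t = delta + beta * a_{t-1} + u_t with a_t = a_{t-1} + (gamma/t)(y_t - a_{t-1}), the helper names (`simulate`, `design`, `ssr`), and the use of a bounded scalar minimizer for the first-step nonlinear least squares are all hypothetical choices made for the example.

```python
# A minimal sketch of the two-step procedure under stylized assumptions
# (illustration only; the paper's auxiliary model and asymptotics are more general).
# Assumed model: y_t = delta + beta * a_{t-1} + u_t, with agents updating their
# forecast by decreasing-gain learning a_t = a_{t-1} + (gamma / t) * (y_t - a_{t-1}).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
T, delta_true, beta_true, gamma_true = 1000, 1.0, 0.5, 0.8

def simulate(delta, beta, gamma, n):
    """Generate y from the stylized adaptive-learning regression."""
    a_prev = 0.0
    y = np.empty(n)
    for t in range(1, n + 1):
        y[t - 1] = delta + beta * a_prev + rng.normal()
        a_prev += (gamma / t) * (y[t - 1] - a_prev)  # recursive forecast update
    return y

def design(y, gamma):
    """Rebuild the learning regressor a_{t-1}(gamma) from the observed y alone."""
    n = len(y)
    a_prev = 0.0
    x = np.empty(n)
    for t in range(1, n + 1):
        x[t - 1] = a_prev
        a_prev += (gamma / t) * (y[t - 1] - a_prev)
    return np.column_stack([np.ones(n), x])

y = simulate(delta_true, beta_true, gamma_true, T)

# Step 1: estimate the gain by (concentrated) nonlinear least squares.
def ssr(g):
    X = design(y, g)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

gamma_hat = minimize_scalar(ssr, bounds=(0.05, 5.0), method="bounded").x

# Step 2: plug gamma_hat in and estimate (delta, beta) by ordinary least squares.
delta_hat, beta_hat = np.linalg.lstsq(design(y, gamma_hat), y, rcond=None)[0]
print(f"gamma_hat={gamma_hat:.3f}, delta_hat={delta_hat:.3f}, beta_hat={beta_hat:.3f}")
```

Concentrating the linear coefficients out of the first-step objective keeps the nonlinear search one-dimensional; any nonlinear least squares routine fitted to an auxiliary model would play the same role as the first step before the second-step OLS regression.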
Similar references
First step immersion in interval linear programming with linear dependencies
We consider a linear programming problem in a general form and suppose that all coefficients may vary in some prescribed intervals. Contrary to classical models, where parameters can attain any value from the interval domains independently, we study problems with linear dependencies between the parameters. We present a class of problems that are easily solved by reduction to the classi...
Learning Mixtures of Linear Regressions with Nearly Optimal Complexity
Mixtures of Linear Regressions (MLR) is an important mixture model with many applications. In this model, each observation is generated from one of the several unknown linear regression components, where the identity of the generated component is also unknown. Previous works either assume strong assumptions on the data distribution or have high complexity. This paper proposes a fixed parameter ...
Consistent Low-Complexity Estimation of Active Parameters in Large Linear Regressions
Some important practical signals and systems can be modeled by very large linear regression models where it is reasonable that most of the parameters are zero. We give an efficient method to solve this combined estimation and structure determination problem. It is related to Akaike-like criteria, and is based on one LMS filter and thus it is of low complexity. Asymptotic analysis shows that the met...
Semiparametric estimation of a mixture of two linear regressions in which one component is known
A new estimation method for the two-component mixture model introduced in Vandekerkhove (2012) is proposed. This model, which consists of a two-component mixture of linear regressions in which one component is entirely known while the proportion, the slope, the intercept and the error distribution of the other component are unknown, seems to be of interest for the analysis of large datasets pro...
Online Learning with Adaptive Local Step Sizes
Almeida et al. have recently proposed online algorithms for local step size adaptation in nonlinear systems trained by gradient descent. Here we develop an alternative to their approach by extending Sutton’s work on linear systems to the general, nonlinear case. The resulting algorithms are computationally little more expensive than other acceleration techniques, do not assume statistical indep...
Journal
Journal title: Statistics & Probability Letters
Year: 2023
ISSN: 1879-2103, 0167-7152
DOI: https://doi.org/10.1016/j.spl.2022.109761